Auto-Generated Summary


General Aspects of AI/ML Framework

  1. Model Identification and Life Cycle Management (LCM): A model ID can be used for LCM operations; additional conditions are categorized into NW-side and UE-side conditions. Consistency between training and inference is ensured through model identification, model transfer, and performance monitoring.
  2. Data Collection Requirements: Data size and latency requirements are defined for training, inference, and monitoring across use cases like CSI compression, beam management, and positioning.
  3. Complexity Metrics: Computational complexity metrics such as FLOPs, model size, and number of parameters are adopted for evaluation.
  4. Generalization and Scalability: Generalization performance is evaluated across deployment scenarios, configurations, and UE parameters. Fine-tuning/re-training improves performance for new scenarios but may degrade performance for previous ones.
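The complexity metrics listed above can be illustrated with a small sketch. The layer sizes, the 4-byte parameter width, and the multiply-accumulate FLOP convention below are illustrative assumptions, not values agreed in the study:

```python
# Hypothetical sketch: computing complexity metrics (number of parameters,
# model size, FLOPs) for a small fully-connected model.

def mlp_complexity(layer_sizes, bytes_per_param=4):
    """Return (params, size_bytes, flops) for a dense MLP.

    FLOPs are counted as one multiply + one add per weight, a common
    convention when reporting computational complexity.
    """
    params = 0
    flops = 0
    for fan_in, fan_out in zip(layer_sizes, layer_sizes[1:]):
        params += fan_in * fan_out + fan_out  # weights + biases
        flops += 2 * fan_in * fan_out         # multiply-accumulate per weight
    return params, params * bytes_per_param, flops

# Illustrative 256 -> 128 -> 64 network
params, size_bytes, flops = mlp_complexity([256, 128, 64])
```

In practice the reported model size also depends on quantization (e.g., 2 bytes per parameter for FP16), which is why size and parameter count are tracked as separate metrics.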

CSI Feedback Enhancement

  1. Inference Procedure: Examples of inference procedures for CSI compression and prediction are provided, including pre-processing and post-processing steps.
  2. Training Collaboration Types: Pros and cons of the training collaboration types are summarized, including flexibility, extendibility, and compatibility (Type 1: joint training at a single side with model transfer; Type 2: joint training across NW side and UE side; Type 3: separate training at NW side and UE side).
  3. Monitoring Mechanisms: Intermediate KPI monitoring mechanisms are defined, including options for calculating KPI differences and monitoring accuracy.
  4. Baseline Assumptions: Baseline simulation assumptions for CSI feedback enhancement evaluations are outlined, including duplex mode, carrier frequency, bandwidth, and channel estimation.
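One intermediate KPI widely used for CSI compression is the squared generalized cosine similarity (SGCS) between the target precoding vector and its reconstruction. A minimal sketch of how such a KPI could be computed and compared against a monitoring threshold, assuming complex eigenvector inputs (the 0.8 threshold is illustrative, not an agreed value):

```python
import numpy as np

def sgcs(v_target, v_hat):
    """Squared generalized cosine similarity between the target
    eigenvector and its reconstruction (1.0 = perfect match)."""
    num = np.abs(np.vdot(v_target, v_hat)) ** 2  # vdot conjugates v_target
    den = (np.linalg.norm(v_target) ** 2) * (np.linalg.norm(v_hat) ** 2)
    return num / den

def kpi_above_threshold(sgcs_samples, threshold=0.8):
    """Illustrative monitoring rule: flag the model as healthy while the
    average intermediate KPI stays at or above the threshold."""
    return sum(sgcs_samples) / len(sgcs_samples) >= threshold
```

SGCS is invariant to the scale and phase of either vector, which makes it suitable for comparing precoders that are only defined up to a complex scalar.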

Beam Management

  1. Representative Sub-Use Cases: Spatial-domain DL beam prediction (BM-Case1) and temporal DL beam prediction (BM-Case2) are studied.
  2. Performance Results: AI/ML achieves good beam prediction accuracy with reduced RS/measurement overhead. Measurement errors and quantization degrade accuracy, but AI/ML still outperforms non-AI baselines.
  3. Generalization and Realistic Considerations: Generalization performance is evaluated for unseen scenarios, configurations, and UE parameters. Realistic considerations such as measurement errors and quantization are included in evaluations.
  4. Monitoring Metrics: Metrics include beam prediction accuracy, link quality KPIs (e.g., throughput, L1-RSRP), and input/output data distribution.
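Beam prediction accuracy in these evaluations is typically reported as Top-K accuracy: whether the genie-aided best beam (highest measured L1-RSRP) is among the K beams ranked highest by the model. A hedged sketch of that metric, with illustrative inputs and no claim about any specific agreed evaluation script:

```python
import numpy as np

def top_k_accuracy(pred_rsrp, true_rsrp, k=1):
    """Fraction of samples for which the best beam (argmax of measured
    L1-RSRP) appears among the Top-K beams predicted by the model.

    pred_rsrp, true_rsrp: arrays of shape (num_samples, num_beams).
    """
    best = np.argmax(true_rsrp, axis=1)           # genie-aided best beam
    topk = np.argsort(pred_rsrp, axis=1)[:, -k:]  # K highest predicted beams
    hits = [b in row for b, row in zip(best, topk)]
    return float(np.mean(hits))
```

Reporting Top-1 alongside Top-K (e.g., K = 2 or 4) shows how much a small follow-up beam sweep over the predicted candidates recovers the accuracy lost by prediction alone.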

Positioning Accuracy Enhancement

  1. Evaluation Assumptions and Methodology: Both direct AI/ML positioning and AI/ML-assisted positioning are evaluated using one-sided models. Model input types include channel impulse response (CIR), power delay profile (PDP), and delay profile (DP), with varying dimensions.
  2. Performance Results: AI/ML significantly improves positioning accuracy compared to RAT-dependent methods, achieving sub-meter (<1 m) accuracy in indoor factory (InF) scenarios.
  3. Fine-Tuning Observations: Fine-tuning improves performance for new deployment scenarios but may degrade performance for previous ones. Dataset size requirements depend on the similarity between scenarios.
  4. Monitoring Methods: Label-based and label-free monitoring methods are feasible for model performance evaluation.
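Positioning accuracy results such as the sub-meter figure above are usually stated at a CDF percentile of the horizontal error (commonly the 90th). A minimal sketch of that computation, assuming 2D position tuples and a nearest-rank percentile (both illustrative choices):

```python
import math

def horizontal_errors(estimated, ground_truth):
    """Per-sample 2D Euclidean positioning error in metres."""
    return [math.dist(e, t) for e, t in zip(estimated, ground_truth)]

def percentile_error(errors, pct=90):
    """Error at the given CDF percentile, using the nearest-rank method."""
    ordered = sorted(errors)
    idx = max(0, math.ceil(pct / 100 * len(ordered)) - 1)
    return ordered[idx]
```

A label-based monitoring method could compare such a percentile against a target on a labelled test set, while label-free methods instead watch quantities like the input data distribution.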

Remaining Aspects

  1. Finalization of Text Proposals: Text proposals for TR 38.843 include detailed descriptions of evaluation assumptions, methodology, KPIs, and performance results for all use cases.
  2. Recommendations for Normative Work: Both BM-Case1 and BM-Case2 are recommended for normative work, along with necessary signaling/mechanisms for data collection, inference, and monitoring.
  3. Model Complexity: Complexity metrics for beam management and CSI feedback enhancement are summarized, including model parameters, size, and computational complexity.

This summary captures the high-level agreements and observations across various topics studied in the document.